KMID : 1022420150070040027
Phonetics and Speech Sciences
2015, Volume 7, No. 4, pp. 27-33
Audio Event Classification Using Deep Neural Networks
Lim Min-Kyu
Lee Dong-Hyun
Kim Kwang-Ho
Kim Ji-Hwan
Abstract
This paper proposes an audio event classification method using Deep Neural Networks (DNN). The proposed method applies a Feed-Forward Neural Network (FFNN) to generate, for each frame, event probabilities over ten audio events (dog barks, engine idling, and so on). For each frame, the mel scale filter bank features of that frame and its consecutive neighboring frames are used as the input vector of the FFNN. These per-frame probabilities are accumulated over the clip for each event, and the classification result is the event with the highest accumulated probability. On the same dataset, the best accuracy reported in previous studies was about 70%, obtained with a Support Vector Machine (SVM). The proposed method achieves a best accuracy of 79.23% on the UrbanSound8K dataset when 80 mel scale filter bank features from each of 7 consecutive frames (560 in total) are used as the input vector of an FFNN with two hidden layers of 2,000 neurons each. In this configuration, the rectified linear unit (ReLU) is used as the activation function.
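A minimal PyTorch sketch of the classifier described in the abstract may help make the pipeline concrete. This is an illustrative reconstruction, not the authors' implementation: the layer sizes (560 -> 2000 -> 2000 -> 10), the 7-frame context window, the ReLU activations, and the per-frame probability accumulation follow the abstract, while everything else (edge padding, feature extraction, training setup) is assumed.

# Illustrative sketch of the FFNN audio event classifier (assumptions noted above).
import torch
import torch.nn as nn

N_MELS = 80          # mel scale filter bank features per frame (from the abstract)
CONTEXT = 7          # consecutive frames stacked into one input vector
N_CLASSES = 10       # audio event classes in UrbanSound8K
INPUT_DIM = N_MELS * CONTEXT  # 560

class EventFFNN(nn.Module):
    """Feed-forward network: two hidden layers with 2,000 ReLU units each."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(INPUT_DIM, 2000), nn.ReLU(),
            nn.Linear(2000, 2000), nn.ReLU(),
            nn.Linear(2000, N_CLASSES),
        )

    def forward(self, x):
        return self.net(x)

def classify_clip(model: EventFFNN, fbank: torch.Tensor) -> int:
    """Classify one clip from its (num_frames, 80) filter bank matrix.

    Each frame is represented by itself plus its neighbors (7 frames in
    total); per-frame event probabilities are accumulated over the clip,
    and the event with the highest accumulated probability is returned.
    """
    half = CONTEXT // 2
    # Repeat edge frames so every frame has a full 7-frame context (assumption).
    padded = torch.cat([fbank[:1].repeat(half, 1), fbank,
                        fbank[-1:].repeat(half, 1)], dim=0)
    windows = torch.stack([padded[i:i + CONTEXT].reshape(-1)
                           for i in range(fbank.size(0))])   # (T, 560)
    with torch.no_grad():
        probs = torch.softmax(model(windows), dim=1)         # (T, 10)
    return int(probs.sum(dim=0).argmax())                    # accumulated argmax

model = EventFFNN()
dummy_clip = torch.randn(200, N_MELS)  # stand-in for real log-mel features
print("predicted event index:", classify_clip(model, dummy_clip))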
KEYWORD
audio event classification, deep neural networks, mel scale filter bank
Listed journal information
Korea Research Foundation (KCI)